Distributed Fractional Packing and Maximum Weighted b-Matching via Tail-Recursive Duality

Authors

  • Christos Koufogiannakis
  • Neal E. Young
Abstract

We present efficient distributed δ-approximation algorithms for fractional packing and maximum weighted b-matching in hypergraphs, where δ is the maximum number of packing constraints in which a variable appears (for maximum weighted b-matching, δ is the maximum edge degree; for graphs δ = 2). (a) For δ = 2 the algorithm runs in O(log m) rounds in expectation and with high probability. (b) For general δ, the algorithm runs in O(log² m) rounds in expectation and with high probability.

I. Keidar (Ed.): DISC 2009, LNCS 5805, pp. 221–238, 2009. © Springer-Verlag Berlin Heidelberg 2009. Partially supported by NSF awards CNS-0626912, CCF-0729071.

1 Background and Results

Given a weight vector $w \in \mathbb{R}^m_+$, a coefficient matrix $A \in \mathbb{R}^{n \times m}_+$ and a vector $b \in \mathbb{R}^n_+$, the fractional packing problem is to compute a vector $x \in \mathbb{R}^m_+$ maximizing $\sum_{j=1}^m w_j x_j$ while meeting all the constraints $\sum_{j=1}^m A_{ij} x_j \le b_i$ ($\forall i = 1 \dots n$). We use δ to denote the maximum number of packing constraints in which a variable appears, that is, $\delta = \max_j |\{i \mid A_{ij} \neq 0\}|$. In the centralized setting, fractional packing can be solved optimally in polynomial time using linear programming. Alternatively, one can use a faster approximation algorithm (e.g. [11]).

Maximum weighted b-matching on a (hyper)graph is the variant where each $A_{ij} \in \{0,1\}$ and the solution $x$ must take integer values (without loss of generality each vertex capacity is also an integer). An instance is defined by a given hypergraph $H(V,E)$ and $b \in \mathbb{Z}^{|V|}_+$; a solution is a vector $x \in \mathbb{Z}^{|E|}_+$ maximizing $\sum_{e \in E} w_e x_e$ and meeting all the vertex capacity constraints $\sum_{e \in E(u)} x_e \le b_u$ ($\forall u \in V$), where $E(u)$ is the set of edges incident to vertex $u$. For this problem, $n = |V|$, $m = |E|$, and δ is the maximum (hyper)edge degree (for graphs δ = 2).

Maximum weighted b-matching is a cornerstone optimization problem in graph theory and computer science. As a special case it includes the ordinary maximum weighted matching problem ($b_u = 1$ for all $u \in V$). In the centralized setting, maximum weighted b-matching on graphs belongs to the "well-solved class of integer linear programs" in the sense that it can be solved in polynomial time [5,6,19]. Moreover, getting a 2-approximate¹ solution for maximum weighted matching is relatively easy, since the obvious greedy algorithm, which selects the heaviest edge that does not conflict with already selected edges, gives a 2-approximation. For hypergraphs the problem is NP-hard, since it generalizes set packing, one of Karp's 21 NP-complete problems [10].

¹ Since it is a maximization problem it is also referred to as a 1/2-approximation.
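The greedy 2-approximation just mentioned is easy to make concrete. The following Python fragment is only an illustrative sketch of that classical baseline (the function name and the toy instance are ours, not from the paper):

```python
# Classical greedy 2-approximation for maximum weighted matching on a graph:
# repeatedly pick the heaviest edge that shares no endpoint with an edge
# already picked. Illustrative only.

def greedy_matching(edges):
    """edges: iterable of (weight, u, v). Returns a list of chosen (u, v) pairs."""
    matched = set()    # endpoints already used by the matching
    matching = []
    for w, u, v in sorted(edges, key=lambda e: e[0], reverse=True):
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

if __name__ == "__main__":
    edges = [(5, "a", "b"), (4, "b", "c"), (4, "a", "d"), (1, "c", "d")]
    # Greedy value 5 + 1 = 6; optimum 4 + 4 = 8; within the factor of 2.
    print(greedy_matching(edges))
```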
Our results. In this work we present efficient distributed δ-approximation algorithms for the above problems. If the input is a maximum weighted b-matching instance, the algorithms produce integral solutions. The method we use is of particular interest in the distributed setting, where it is the first primal-dual extension of a non-standard local-ratio technique [13,2].

– For fractional packing where each variable appears in at most two constraints (δ = 2), we show a distributed 2-approximation algorithm running in O(log m) rounds in expectation and with high probability. This is the first 2-approximation algorithm requiring only O(log m) rounds. It improves the approximation ratio over the previously best known algorithm [14]. (For a summary of known results see Figure 1.)
– For fractional packing where each variable appears in at most δ constraints, we give a distributed δ-approximation algorithm running in O(log² m) rounds in expectation and with high probability, where m is the number of variables. For small δ, this improves over the best previously known constant-factor approximation [14], but the running time is slower by a logarithmic factor.
– For maximum weighted b-matching on graphs we give a distributed 2-approximation algorithm running in O(log n) rounds in expectation and with high probability. Maximum weighted b-matching generalizes the well-studied maximum weighted matching problem. For a 2-approximation, our algorithm is faster by at least a logarithmic factor than any previous algorithm. Specifically, in O(log n) rounds, our algorithm gives the best known approximation ratio. The best previously known algorithms compute a (1 + ε)-approximation in O(ε⁻⁴ log² n) rounds [17] or in O(ε⁻² + ε⁻¹ log(ε⁻¹ n) log n) rounds [20]. For a 2-approximation, both these algorithms need O(log² n) rounds.
– For maximum weighted b-matching on hypergraphs with maximum hyperedge degree δ we give a distributed δ-approximation algorithm running in O(log² m) rounds in expectation and with high probability, where m is the number of hyperedges. Our result improves over the best previously known O(δ)-approximation ratio by [14], but it is slower by a logarithmic factor.

| problem | approx. ratio | running time | where | when |
| --- | --- | --- | --- | --- |
| max weighted matching on graphs | O(Δ) | O(1) | [22] | 2000 |
| | 5 | O(log² n) | [23] | 2004 |
| | 2 | O(m) | [7] | 2004 |
| | O(1) (> 2) | O(log n) | [14] | 2006 |
| | (4 + ε) | O(ε⁻¹ log ε⁻¹ log n) | [18] | 2007 |
| | (2 + ε) | O(log ε⁻¹ log n) | [17] | 2008 |
| | (1 + ε) | O(ε⁻⁴ log² n) | [17] | 2008 |
| | (1 + ε) | O(ε⁻² + ε⁻¹ log(ε⁻¹ n) log n) | [20] | 2008 |
| | 2 | O(log² n) | [17,20] (ε = 1) | 2008 |
| | 2 | O(log n) | here | 2009 |
| fractional packing with δ = 2 | O(1) (> 2) | O(log m) | [14] | 2006 |
| | 2 | O(log m) | here | 2009 |
| max weighted matching on hypergraphs | O(δ) (> δ) | O(log m) | [14] | 2006 |
| | δ | O(log² m) | here | 2009 |
| fractional packing with general δ | O(1) (> 12) | O(log m) | [14] | 2006 |
| | δ | O(log² m) | here | 2009 |

Fig. 1. Distributed algorithms for fractional packing and maximum weighted matching.

Related work for maximum weighted matching. There are several works considering distributed maximum weighted matching on edge-weighted graphs. Uehara and Chen present a constant-time O(Δ)-approximation algorithm [22], where Δ is the maximum vertex degree. Wattenhofer and Wattenhofer improve this result, showing a randomized 5-approximation algorithm taking O(log² n) rounds [23]. Hoepman shows a deterministic 2-approximation algorithm taking O(m) rounds [7]. Lotker, Patt-Shamir and Rosén give a randomized (4 + ε)-approximation algorithm running in O(ε⁻¹ log ε⁻¹ log n) rounds [18]. Lotker, Patt-Shamir and Pettie improve this result to a randomized (2 + ε)-approximation algorithm taking O(log ε⁻¹ log n) rounds [17]. Their algorithm uses as a black box any distributed constant-factor approximation algorithm for maximum weighted matching that takes O(log n) rounds (e.g. [18]). Moreover, they mention (without details) that there is a distributed (1 + ε)-approximation algorithm taking O(ε⁻⁴ log² n) rounds, based on the parallel algorithm by Hougardy and Vinkemeier [8]. Nieberg presents a (1 + ε)-approximation algorithm in O(ε⁻² + ε⁻¹ log(ε⁻¹ n) log n) rounds [20]. The latter two results give randomized 2-approximation algorithms for maximum weighted matching in O(log² n) rounds.
Related work for fractional packing. Kuhn, Moscibroda and Wattenhofer show efficient distributed approximation algorithms for fractional packing [14]. They first show a (1 + ε)-approximation algorithm for fractional packing with logarithmic message size, but the running time depends on the input coefficients. For unbounded message size they show a constant-factor approximation algorithm for fractional packing which takes O(log m) rounds. If an integer solution is desired, then distributed randomized rounding ([15]) can be used. This gives an O(δ)-approximation for maximum weighted b-matching on (hyper)graphs with high probability in O(log m) rounds, where δ is the maximum hyperedge degree (for graphs δ = 2). (The hidden constant factor in the big-O notation of the approximation ratio can be relatively large compared to a small δ, say δ = 2.)

Lower bounds. The best lower bounds known for distributed packing and matching are given by Kuhn, Moscibroda and Wattenhofer [14]. They prove that to achieve a constant or even a poly-logarithmic approximation ratio for fractional maximum matching, any algorithm requires at least Ω(√(log n / log log n)) and Ω(log Δ / log log Δ) rounds, where Δ is the maximum vertex degree.

Other related work. For unweighted maximum matching on graphs, Israeli and Itai give a randomized distributed 2-approximation algorithm running in O(log n) rounds [9]. Lotker, Patt-Shamir and Pettie improve this result, giving a randomized (1 + ε)-approximation algorithm taking O(ε⁻³ log n) rounds [17]. Czygrinow, Hańćkowiak, and Szymańska show a deterministic 3/2-approximation algorithm which takes O(log⁴ n) rounds [4]. A (1 + ε)-approximation for maximum weighted matching on graphs is in NC [8].

The rest of the paper is organized as follows. In Section 2 we describe a non-standard primal-dual technique to get a δ-approximation algorithm for fractional packing and maximum weighted b-matching. In Section 3 we present the distributed implementation for δ = 2. Then in Section 4 we show the distributed δ-approximation algorithm for general δ. We conclude in Section 5.

2 Covering and Packing

Koufogiannakis and Young show sequential and distributed δ-approximation algorithms for general covering problems [13,12], where δ is the maximum number of covering variables on which a covering constraint depends. As a special case, their algorithms compute δ-approximate solutions for fractional covering problems of the form $\min\{\sum_{i=1}^n b_i y_i : \sum_{i=1}^n A_{ij} y_i \ge w_j \ (\forall j = 1 \dots m),\ y \in \mathbb{R}^n_+\}$. The linear programming dual of such a problem is the following fractional packing problem: $\max\{\sum_{j=1}^m w_j x_j : \sum_{j=1}^m A_{ij} x_j \le b_i \ (\forall i = 1 \dots n),\ x \in \mathbb{R}^m_+\}$. For packing, δ is the maximum number of packing constraints in which a packing variable appears, $\delta = \max_j |\{i \mid A_{ij} \neq 0\}|$. Here we extend the distributed approximation algorithm for fractional covering by [12] to compute δ-approximate solutions for fractional packing using a non-standard primal-dual approach.

Notation. Let $C_j$ denote the $j$-th covering constraint ($\sum_{i=1}^n A_{ij} y_i \ge w_j$) and $P_i$ denote the $i$-th packing constraint ($\sum_{j=1}^m A_{ij} x_j \le b_i$). Let $\mathrm{Vars}(S)$ denote the set of (covering or packing) variable indexes that appear in (covering or packing) constraint $S$. Let $\mathrm{Cons}(z)$ denote the set of (covering or packing) constraint indexes in which (covering or packing) variable $z$ appears. Let $N(x_s)$ denote the set of packing variables that appear in the packing constraints in which $x_s$ appears, that is, $N(x_s) = \{x_j \mid j \in \mathrm{Vars}(P_i) \text{ for some } i \in \mathrm{Cons}(x_s)\} = \mathrm{Vars}(\mathrm{Cons}(x_s))$.
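These index sets are straightforward to compute from the coefficient matrix. The sketch below (hypothetical helper names, not from the paper, using a dense list-of-lists matrix for simplicity) mirrors the definitions of Vars, Cons, N and δ:

```python
# Illustrative helpers mirroring the notation: Vars(P_i), Cons(x_j), N(x_s)
# and delta, computed from a dense coefficient matrix A with n rows
# (packing constraints) and m columns (variables).

def vars_of_constraint(A, i):
    """Vars(P_i): indexes of variables appearing in packing constraint i."""
    return {j for j, a in enumerate(A[i]) if a != 0}

def cons_of_variable(A, j):
    """Cons(x_j): indexes of packing constraints in which variable x_j appears."""
    return {i for i, row in enumerate(A) if row[j] != 0}

def neighbors(A, s):
    """N(x_s) = Vars(Cons(x_s)): variables sharing some constraint with x_s."""
    return {j for i in cons_of_variable(A, s) for j in vars_of_constraint(A, i)}

def delta(A):
    """delta = max_j |Cons(x_j)|: max number of constraints any variable appears in."""
    return max(len(cons_of_variable(A, j)) for j in range(len(A[0])))
```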
Fractional Covering. First we give a brief description of the δ-approximation algorithm for fractional covering by [13,12].² The algorithm performs steps to cover not-yet-satisfied covering constraints. Let $y^t$ be the solution after the first $t$ steps have been performed. (Initially $y^0 = 0$.) Given $y^t$, let $w^t_j = w_j - \sum_{i=1}^n A_{ij} y^t_i$ be the slack of $C_j$ after the first $t$ steps. (Initially $w^0 = w$.) The algorithm is given by Alg. 1.

² The algorithm is equivalent to local-ratio when $A \in \{0,1\}^{n \times m}$ and $y \in \{0,1\}^n$ [1,2]. See [13] for a more general algorithm and a discussion of the relation between this algorithm and local ratio.

Alg. 1 (greedy δ-approximation algorithm for fractional covering [13,12]):
1. Initialize $y^0 \leftarrow 0$, $w^0 \leftarrow w$, $t \leftarrow 0$.
2. While there exists an unsatisfied covering constraint $C_s$, do a step for $C_s$:
3.   Set $t = t + 1$.
4.   Let $\beta_s \leftarrow w^{t-1}_s \cdot \min_{i \in \mathrm{Vars}(C_s)} b_i / A_{is}$.   ... OPT cost to satisfy $C_s$ given the current solution
5.   For each $i \in \mathrm{Vars}(C_s)$:
6.     Set $y^t_i = y^{t-1}_i + \beta_s / b_i$.   ... increase $y_i$ inversely proportionally to its cost
7.     For each $j \in \mathrm{Cons}(y_i)$, update $w^t_j = w^{t-1}_j - A_{ij} \beta_s / b_i$.   ... new slacks
8. Return $y = y^t$.

There may be covering constraints for which the algorithm never performs a step, because they are covered by steps done for other constraints with which they share variables. Also note that increasing $y_i$ for all $i \in \mathrm{Vars}(C_s)$ decreases the slacks of all constraints which depend on $y_i$.

Our general approach. [13] shows that the above algorithm is a δ-approximation for covering, but they do not show any result for matching or other packing problems. Our general approach is to recast their analysis as a primal-dual analysis, showing that the algorithm (Alg. 1) implicitly computes a solution to the dual packing problem of interest here. To do this we use the tail-recursive approach implicit in previous local-ratio analyses [3].

After the $t$-th step of the algorithm, define the residual covering problem to be $\min\{\sum_{i=1}^n b_i y_i : \sum_{i=1}^n A_{ij} y_i \ge w^t_j \ (\forall j = 1 \dots m),\ y \in \mathbb{R}^n_+\}$ and the residual packing problem to be its dual, $\max\{\sum_{j=1}^m w^t_j x_j : \sum_{j=1}^m A_{ij} x_j \le b_i \ (\forall i = 1 \dots n),\ x \in \mathbb{R}^m_+\}$. The algorithm will compute δ-approximate primal and dual pairs $(x^t, y^{T-t})$ for the residual problem for each $t$. As shown in what follows, the algorithm increments the covering solution $y$ in a forward way and the packing solution $x$ in a "tail-recursive" manner.
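For concreteness, a direct sequential transcription of Alg. 1 might look as follows (a minimal illustrative sketch with names of our own choosing; it also records the order of steps, which the packing algorithm below consumes in reverse):

```python
# Minimal sequential sketch of Alg. 1 (greedy delta-approximation for
# fractional covering): min { sum_i b[i]*y[i] : sum_i A[i][j]*y[i] >= w[j] }.
# Returns the cover y and the list of (constraint, beta) steps in the order
# they were performed.

def greedy_cover(A, b, w, eps=1e-12):
    n, m = len(A), len(w)
    y = [0.0] * n
    slack = list(w)                        # slack[j] = w^t_j, residual weight of C_j
    steps = []                             # (s, beta_s) in the order performed
    while True:
        unsat = [j for j in range(m) if slack[j] > eps]
        if not unsat:
            break
        s = unsat[0]                       # any not-yet-satisfied constraint C_s
        rows = [i for i in range(n) if A[i][s] != 0]    # Vars(C_s)
        beta = slack[s] * min(b[i] / A[i][s] for i in rows)
        for i in rows:
            y[i] += beta / b[i]            # raise y_i inversely proportional to b_i
            for j in range(m):             # update slacks of constraints using y_i
                if A[i][j] != 0:
                    slack[j] -= A[i][j] * beta / b[i]
        steps.append((s, beta))
    return y, steps
```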
Standard primal-dual approach does not work. For even simple instances, generating a δ-approximate primal-dual pair for the above greedy algorithm requires a non-standard approach. For example, consider $\min\{y_1 + y_2 + y_3 : y_1 + y_2 \ge 1,\ y_1 + y_3 \ge 5,\ y_1, y_2, y_3 \ge 0\}$. If the greedy algorithm (Alg. 1) does the constraints in either order and chooses β maximally, it gives a solution of cost 10. In the dual, $\max\{x_{12} + 5 x_{13} : x_{12} + x_{13} \le 1,\ x_{12}, x_{13} \ge 0\}$, the only way to generate a solution of cost 5 is to set $x_{13} = 1$ and $x_{12} = 0$. A standard primal-dual approach would raise the dual variable for each covering constraint when that constraint is processed (essentially allowing a dual solution to be generated in an online fashion, constraint by constraint). That cannot work here. For example, if the constraint $y_1 + y_2 \ge 1$ is covered first by setting $y_1 = y_2 = 1$, then the dual variable $x_{12}$ would be increased, preventing $x_{13}$ from reaching 1. Instead, assuming the step to cover $y_1 + y_2 \ge 1$ is done first, the algorithm should not increase any packing variable until a solution to the residual dual problem is computed. After this step the residual primal problem is $\min\{y'_1 + y'_2 + y'_3 : y'_1 + y'_2 \ge -1,\ y'_1 + y'_3 \ge 4,\ y'_1, y'_2, y'_3 \ge 0\}$, and the residual dual problem is $\max\{-x_{12} + 4 x_{13} : x_{12} + x_{13} \le 1,\ x_{12}, x_{13} \ge 0\}$. Once a solution $x'$ to the residual dual problem is computed (either recursively or as shown later in this section), the dual variable $x'_{12}$ for the current covering constraint should be raised maximally, giving the dual solution $x$ for the current problem. In detail, the residual dual solution $x'$ is $x'_{12} = 0$ and $x'_{13} = 1$, and the cost of the residual dual solution is 4. Then the variable $x'_{12}$ is raised maximally to give $x_{12}$. However, since $x'_{13} = 1$, $x'_{12}$ cannot be increased, thus $x = x'$. Although neither dual coordinate is increased at this step, the dual cost increases from 4 to 5, because the weight of $x_{13}$ increases from $w'_{13} = 4$ to $w_{13} = 5$. (See Figure 2 in the appendix.) In what follows we present this formally.

Fractional Packing. We show that the greedy algorithm for covering creates an ordering of the covering constraints for which it performs steps, which we can then use to raise the corresponding packing variables. Let $t_j$ denote the time³ at which a step to cover $C_j$ was performed. Let $t_j = 0$ if no step was performed for $C_j$. We define the relation "$C_{j'} \prec C_j$" on two covering constraints $C_{j'}$ and $C_j$ which share a variable and for which the algorithm performed steps, to indicate that constraint $C_{j'}$ was done first by the algorithm.

³ In general, by "time" we mean some reasonable way to distinguish the order in which steps were performed to satisfy covering constraints. For now, the time at which a step was performed can be thought of as the step number (line 3 of Alg. 1). It will be slightly different in the distributed setting.

Definition 1. Let $C_{j'} \prec C_j$ if $\mathrm{Vars}(C_{j'}) \cap \mathrm{Vars}(C_j) \neq \emptyset$ and $0 < t_{j'} < t_j$.

Note that the relation is not defined for covering constraints for which a step was never performed by the algorithm. Let $D$ be the partially ordered set (poset) of all covering constraints for which the algorithm performed a step, ordered according to "≺". $D$ is partially ordered because "≺" is not defined for covering constraints that do not share a variable. In addition, since for each covering constraint $C_j$ we have a corresponding dual packing variable $x_j$, abusing notation we write $x_{j'} \prec x_j$ if $C_{j'} \prec C_j$. Therefore, $D$ is also a poset of packing variables.

Definition 2. A reverse order of poset $D$ is an order $C_{j_1}, C_{j_2}, \dots, C_{j_k}$ (or equivalently $x_{j_1}, x_{j_2}, \dots, x_{j_k}$) such that for $l > i$ either $C_{j_l} \prec C_{j_i}$, or the relation "≺" is not defined for constraints $C_{j_i}$ and $C_{j_l}$ (because they do not share a variable).

Alg. 2 below shows the sequential δ-approximation algorithm for fractional packing. The algorithm simply considers the packing variables corresponding to covering constraints for which Alg. 1 did steps, and raises each variable maximally without violating the packing constraints. The order in which the variables are considered matters: the variables should be considered in the reverse of the order in which steps were done for the corresponding constraints, or an order which is "equivalent" (see Lemma 1). (This flexibility is necessary for the distributed setting.)

Alg. 2 (greedy δ-approximation algorithm for fractional packing):
1. Run Alg. 1, recording the poset $D$.
2. Let $T$ be the number of steps performed by Alg. 1.
3. Initialize $x^T \leftarrow 0$, $t \leftarrow T$.   ... note that $t$ will be decreasing from $T$ to 0
4. Let $\Pi$ be some reverse order of $D$.   ... any reverse order of $D$ works, see Lemma 1
5. For each variable $x_s \in D$ in the order given by $\Pi$ do:
6.   Set $x^{t-1} = x^t$.
7.   Raise $x^{t-1}_s$ until a packing constraint that depends on $x^{t-1}_s$ is tight, that is, set $x^{t-1}_s = \min_{i \in \mathrm{Cons}(x_s)} (b_i - \sum_{j=1}^m A_{ij} x^t_j) / A_{is}$.
8.   Set $t = t - 1$.
9. Return $x = x^0$.
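Continuing the illustrative sketch above, Alg. 2 could be transcribed as follows (again with names of our own; it replays the steps recorded by the covering sketch back to front, which is one valid reverse order of D):

```python
# Minimal sequential sketch of Alg. 2 (greedy delta-approximation for
# fractional packing): max { sum_j w[j]*x[j] : sum_j A[i][j]*x[j] <= b[i] }.
# 'steps' is the list returned by greedy_cover above.

def greedy_pack(A, b, steps):
    n, m = len(A), len(A[0])
    x = [0.0] * m
    for s, _beta in reversed(steps):
        rows = [i for i in range(n) if A[i][s] != 0]   # Cons(x_s)
        # raise x_s maximally, i.e. until some packing constraint containing
        # x_s becomes tight
        x[s] = min((b[i] - sum(A[i][j] * x[j] for j in range(m))) / A[i][s]
                   for i in rows)
    return x
```

On a maximum weighted b-matching instance (each $A_{ij} \in \{0,1\}$ and $b$ integral), every raise in this sketch is by an integer amount, consistent with the earlier claim that the algorithms produce integral solutions for b-matching.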
The solution $x$ is feasible at all times, since a packing variable is increased only until a packing constraint gets tight.

Lemma 1. Alg. 2 returns the same solution $x$ using (at line 4) any reverse order of $D$.

Proof. Let $\Pi = x_{j_1}, x_{j_2}, \dots, x_{j_k}$ and $\Pi' = x_{j'_1}, x_{j'_2}, \dots, x_{j'_k}$ be two different reverse orders of $D$. Let $x^{\Pi,1\dots q}$ be the solution computed so far by Alg. 2 after raising the first $q$ packing variables of order $\Pi$. We prove that $x^{\Pi,1\dots k} = x^{\Pi',1\dots k}$. Assume that $\Pi$ and $\Pi'$ have the same order for their first $q$ variables, that is, $j_i = j'_i$ for all $i \le q$. Then $x^{\Pi,1\dots q} = x^{\Pi',1\dots q}$. The first variable at which the two orders disagree is the $(q+1)$-th one, that is, $j_{q+1} \neq j'_{q+1}$. Let $s = j_{q+1}$. Then $x_s$ must appear at some position $l$ in $\Pi'$ with $q+1 < l \le k$. The value of $x_s$ depends only on the values of the variables in $N(x_s)$ at the time when $x_s$ is set. We prove that for each $x_j \in N(x_s)$ we have $x_j^{\Pi,1\dots q} = x_j^{\Pi',1\dots l-1}$, and thus $x_s^{\Pi,1\dots q+1} = x_s^{\Pi',1\dots l}$. Moreover, since the algorithm considers each packing variable only once, this implies $x_s^{\Pi,1\dots k} = x_s^{\Pi,1\dots q+1} = x_s^{\Pi',1\dots l} = x_s^{\Pi',1\dots k}$.
(a) For each $x_j \in N(x_s)$ with $x_s \prec x_j$, the variable $x_j$ must already have been set within the first $q$ steps, otherwise $\Pi$ would not be a valid reverse order of $D$. Moreover, each packing variable can be increased only once, so once it is set it keeps the same value until the end. Thus, for each $x_j$ such that $x_s \prec x_j$, we have $x_j^{\Pi,1\dots q} = x_j^{\Pi',1\dots q} = x_j^{\Pi',1\dots l-1}$.
(b) For each $x_j \in N(x_s)$ with $x_j \prec x_s$, $x_j$ cannot appear at positions $q+1, \dots, l-1$ of $\Pi'$, otherwise $\Pi'$ would not be a valid reverse order of $D$. Thus, for each $x_j$ such that $x_j \prec x_s$, we have $x_j^{\Pi,1\dots q} = x_j^{\Pi',1\dots q} = x_j^{\Pi',1\dots l-1} = 0$.
So in either case, for each $x_j \in N(x_s)$, we have $x_j^{\Pi,1\dots q} = x_j^{\Pi',1\dots l-1}$, and thus $x_s^{\Pi,1\dots q+1} = x_s^{\Pi',1\dots l}$. The lemma follows by induction on the number of edges.

The following lemma and weak duality prove that the solution $x$ returned by Alg. 2 is δ-approximate.

Lemma 2. For the solutions $y$ and $x$ returned by Alg. 1 and Alg. 2 respectively, $\sum_{j=1}^m w_j x_j \ge \frac{1}{\delta} \sum_{i=1}^n b_i y_i$.

Proof. Lemma 1 shows that any reverse order of $D$ produces the same solution, so w.l.o.g. we assume here that the reverse order $\Pi$ used by Alg. 2 is the reverse of the order in which steps to satisfy covering constraints were performed by Alg. 1. When Alg. 1 does a step to satisfy the covering constraint $C_s$ (by increasing $y_i$ by $\beta_s / b_i$ for all $i \in \mathrm{Vars}(C_s)$), the cost of the covering solution $\sum_i b_i y_i$ increases by at most $\delta \beta_s$, since $C_s$ depends on at most δ variables ($|\mathrm{Vars}(C_s)| \le \delta$). Thus the final cost of the cover $y$ is at most $\sum_{s \in D} \delta \beta_s$. Define $\Psi^t = \sum_j w^t_j x^t_j$ to be the cost of the packing $x^t$. Recall that $x^T = 0$, so $\Psi^T = 0$, and that the final packing solution is given by the vector $x^0$, so the cost of the final packing solution is $\Psi^0$. To prove the lemma we have to show that $\Psi^0 \ge \sum_{s \in D} \beta_s$. We have $\Psi^0 = \Psi^0 - \Psi^T = \sum_{t=1}^T (\Psi^{t-1} - \Psi^t)$, so it is enough to show that $\Psi^{t-1} - \Psi^t \ge \beta_s$, where $C_s$ is the covering constraint done at the $t$-th step of Alg. 1. Then, $\Psi^{t-1} - \Psi^t$ is …
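As a concrete sanity check of Lemma 2 (a worked example using only the small instance discussed earlier in this section, not an argument from the paper), with δ = 2:

```latex
% Covering instance: min { y_1 + y_2 + y_3 : y_1 + y_2 >= 1, y_1 + y_3 >= 5 },
% i.e. b = (1,1,1), w = (1,5), delta = 2.
\begin{align*}
\text{Alg.~1, step for } C_{12}&:\quad \beta_{12} = 1,\ y = (1,1,0),\ \text{residual weight } w'_{13} = 5 - 1 = 4,\\
\text{Alg.~1, step for } C_{13}&:\quad \beta_{13} = 4,\ y = (5,1,4),\ \textstyle\sum_i b_i y_i = 10,\\
\text{Alg.~2, reverse order}&:\quad x_{13} = 1,\ \text{then } x_{12} = 0,\ \textstyle\sum_j w_j x_j = 1\cdot 0 + 5\cdot 1 = 5,\\
\text{Lemma 2}&:\quad \textstyle\sum_j w_j x_j = 5 \;\ge\; \tfrac{1}{\delta}\textstyle\sum_i b_i y_i = \tfrac{10}{2} = 5.
\end{align*}
```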

Similar resources

Lower and Upper Bounds for Distributed Packing and Covering

We make a step towards understanding the distributed complexity of global optimization problems. We give bounds on the trade-off between locality and achievable approximation ratio of distributed algorithms for packing and covering problems. Extending a result of [9], we show that in k communication rounds, maximum matching and therefore packing problems cannot be approximated better than Ω(n 2...


Linear-in-$Δ$ Lower Bounds in the LOCAL Model

By prior work, there is a distributed graph algorithm that finds a maximal fractional matching (maximal edge packing) in O(∆) rounds, independently of n; here ∆ is the maximum degree of the graph and n is the number of nodes in the graph. We show that this is optimal: there is no distributed algorithm that finds a maximal fractional matching in o(∆) rounds, independently of n. Our work gives th...


Chain Packings and Odd Subtree Packings

A chain packing H in a graph is a subgraph satisfying given degree constraints at the vertices. Its size is the number of odd degree vertices in the subgraph. An odd subtree packing is a chain packing which is a forest in which all non-isolated vertices have odd degree in the forest. We show that for a given graph and degree constraints, the size of a maximum chain packing and a maximum odd sub...


Stabilizing Weighted Graphs

An edge-weighted graph G = (V,E) is called stable if the value of a maximum-weight matching equals the value of a maximum-weight fractional matching. Stable graphs play an important role in some interesting game theory problems, such as network bargaining games and cooperative matching games, because they characterize instances which admit stable outcomes. Motivated by this, in the last few yea...


Greedy Approximation via Duality for Packing, Combinatorial Auctions and Routing

We study simple greedy approximation algorithms for general class of integer packing problems. We provide a novel analysis based on the duality theory of linear programming. This enables to significantly improve on the approximation ratios of these greedy methods, and gives a unified analysis of greedy for many packing problems. We show matching lower bounds on the ratios of such greedy methods...


